Test Workflows
The Workflow Designer provides a built-in test feature that allows you to validate workflow logic before deployment. When testing, a chat interface opens directly within the designer, offering an interactive environment to execute workflows in real time and observe results as they occur.
You can supply an initial execution state to simulate specific scenarios, while node snapshots capture state changes at each step, giving full visibility into how data flows through the workflow.
How It Works
When you click the Test button in the Workflow Designer, a chat panel opens next to your workflow. This panel lets you interact with your workflow directly, sending messages and seeing how the system responds in real time.
Testing runs your workflow without affecting live data. Each test is independent, so no conversation history is saved, and your production workflows remain untouched. The workflow you’re currently designing is used for testing, so you can check unsaved changes before making them permanent.
As the workflow runs, step-by-step updates are shown for each node. These snapshots display what the workflow is doing, the data it’s handling, and the outputs it produces. This gives you a clear view of how your workflow processes information and helps identify any issues before deployment.
Test in Workflow Designer
The Workflow Designer test panel provides:
- Chat interface: Send messages and receive workflow responses in real time
- Canvas visualization: Watch nodes execute in sequence on the workflow canvas
- State inspection: View state snapshots after each node completes
- Error display: See errors and warnings as they occur
The test panel behaves like the production chatbot but runs against the current workflow definition in the designer, including any unsaved changes.
Test with Initial State
The initial state feature lets you start a workflow test with predefined values instead of beginning from scratch. This is helpful when you want to simulate real situations, reuse known inputs, or test how the workflow behaves under specific conditions.
Rather than waiting for data to be generated during execution, you can provide it upfront and see how the workflow responds.
What Initial State Does
When you define an initial state, the system adds your values to the workflow’s internal data store. These values are then available to all steps that run afterward.
You can use initial state to:
- Test workflows with known input values
- Simulate data coming from earlier steps
- Check how nodes behave with unusual or edge-case inputs
- Bypass trigger nodes by pre-filling the data they would normally collect, allowing you to test downstream nodes directly. For example, if a trigger node normally prompts the user for a question and stores it in `state.data.user_query`, you can provide `{"user_query": "What is our vacation policy?"}` as initial state. The workflow starts with this value already set, so you can immediately test how downstream nodes process the query without waiting for trigger input.
- Test subgraphs on their own by providing the data they would normally receive from a main workflow. Subgraphs run as part of a larger workflow and expect certain data to be passed in. When you open a subgraph in the Workflow Designer and want to test it directly, you can use initial state to supply the required data yourself. This lets you verify that the subgraph works correctly before connecting it to the main workflow (see the sketch after this list).
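As an illustration, suppose a summarization subgraph normally receives a source text and a length limit from its parent workflow. The field names below are hypothetical, not part of the product; substitute whatever keys your subgraph actually reads:

```json
{
  "source_text": "Full text of the vacation policy document to summarize...",
  "summary_length": 200
}
```

With this initial state in place, the subgraph can run in isolation exactly as if the parent workflow had passed the data in.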
Types of Initial State Data
The initial state editor supports three types of information:
- State Data: Custom key-value pairs that are stored as workflow data and can be referenced by later steps.
- Messages: A predefined conversation history, including user, AI, or system messages.
- System Variables: Overrides for system-level information, such as user or tenant details.
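To make the three types concrete, here is a sketch of how they might fit together in one payload. The top-level keys (`data`, `messages`, `system`) and the message fields are illustrative assumptions; check the initial state editor for the actual schema:

```json
{
  "data": {
    "user_query": "What is our vacation policy?",
    "document_id": "12345"
  },
  "messages": [
    { "role": "user", "content": "Hi, I have an HR question." },
    { "role": "assistant", "content": "Sure, what would you like to know?" }
  ],
  "system": {
    "tenant_id": "acme-corp",
    "user_id": "u-789"
  }
}
```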
Example Initial State
```json
{
  "user_query": "What is our vacation policy?",
  "document_id": "12345",
  "max_results": 10
}
```
Once this data is provided, workflow steps can directly use it. For example:
- `${state.data.user_query}` returns `"What is our vacation policy?"`
- `${state.data.document_id}` returns `"12345"`
This allows you to test how the workflow behaves without manually entering the same inputs every time.
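For instance, a node configuration could interpolate these values into a prompt template. The node definition below is purely illustrative and does not reflect an exact product schema:

```json
{
  "node": "answer_question",
  "type": "llm_prompt",
  "prompt": "Using document ${state.data.document_id}, answer: ${state.data.user_query}",
  "max_results": "${state.data.max_results}"
}
```

During a test run, the expressions resolve against the initial state you supplied, so the node behaves as if earlier steps had already produced those values.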
Execution Results
While a test is running, the Workflow Designer shows clear, step-by-step feedback so you can understand exactly what is happening.
Node Snapshots
After each step finishes, the system captures a snapshot that shows:
- Which step just ran
- The current workflow data at that moment
- How long the step took to complete
- Any output produced by the step
These snapshots make it easier to follow data flow and identify where something might not be working as expected.
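As a rough sketch, the information captured in a single snapshot might look like the following. The field names are assumptions for illustration; the designer's actual snapshot format may differ:

```json
{
  "node": "retrieve_documents",
  "status": "completed",
  "duration_ms": 312,
  "state": {
    "user_query": "What is our vacation policy?",
    "retrieved_docs": ["12345", "67890"]
  },
  "output": "2 documents retrieved"
}
```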
Execution Events
As the test progresses, the system reports key execution milestones, such as:
- When execution starts
- When each step completes
- When the workflow finishes successfully
- When an error occurs
This helps you track the overall progress of the test from start to finish.
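Conceptually, the milestone stream for a simple two-node workflow might read like this (event and workflow names are illustrative, not the product's actual event schema):

```json
[
  { "event": "execution_started", "workflow": "hr_faq" },
  { "event": "node_completed", "node": "trigger" },
  { "event": "node_completed", "node": "answer_question" },
  { "event": "execution_finished", "status": "success" }
]
```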
Error Handling
When errors occur during testing:
- The error message displays in the chat panel
- The failing node highlights on the canvas
- Execution stops at the point of failure
- State snapshots up to the error remain visible for debugging
Test in AI Chatbot
Workflows assigned to agents can also be tested through the AI Chatbot for end-to-end validation:
- The chatbot uses the saved workflow configuration
- Conversation history persists across messages
- Agent settings (system prompts, knowledge base) apply to the test
- Citations and source references appear as configured
Chatbot testing validates the complete user experience, including agent behavior and response formatting.
Best Practices
Follow these best practices to ensure your workflows behave as expected before deployment:
- Test with a variety of input values to confirm the workflow responds correctly in different situations.
- Use the initial state feature to focus on specific steps without needing to run the entire workflow from start to finish.
- Review node snapshots after each step to verify that data is being processed and passed correctly.
- Include edge cases in your testing, such as empty inputs, long text entries, or special characters (a sample set of edge-case initial states appears after this list).
- Verify error handling by intentionally providing incorrect or incomplete data and observing how the workflow responds.
- Compare results from the Workflow Designer test with responses in the AI Chatbot to ensure consistent behavior before going live.
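For example, a small set of edge-case initial states like the following can be reused across test runs. The values are illustrative; tailor them to the fields your workflow actually reads:

```json
[
  { "user_query": "" },
  { "user_query": "An unusually long question repeated many times to probe input length limits..." },
  { "user_query": "Special characters: <>&\"'{}%" }
]
```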